Part A: DCGAN¶
What is a DCGAN?¶
A Deep Convolutional Generative Adversarial Network (DCGAN) is a type of Generative Adversarial Network (GAN) that uses deep convolutional layers in both its Generator and Discriminator architectures. It is especially well-suited for image generation tasks.
Key Components:¶
- Generator: Learns to produce realistic-looking images from random noise (latent vectors). It uses transposed convolutional layers (Conv2DTranspose) to upsample low-dimensional noise into a full-sized image.
- Discriminator: Acts as a binary classifier that distinguishes between real images (from the dataset) and fake images (from the generator). It uses convolutional layers to extract features and make a real/fake prediction.
Adversarial Training:
- The generator tries to fool the discriminator by generating more realistic images.
- The discriminator tries to get better at identifying fakes.
- Over time, both networks improve through this adversarial process, resulting in high-quality generated images.
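This push-and-pull can be illustrated with a toy binary cross-entropy calculation (a sketch only; the actual loss functions are defined in the DCGANModel class later in this notebook):

```python
import math

# Toy sketch of the adversarial objective using binary cross-entropy.
# d_score is the discriminator's probability that a given fake image is real.
def bce(target, pred):
    return -(target * math.log(pred) + (1 - target) * math.log(1 - pred))

d_score = 0.2  # discriminator is fairly sure this fake is fake

g_loss = bce(1.0, d_score)  # generator wants fakes scored as real (target 1) -> large loss here
d_loss = bce(0.0, d_score)  # discriminator wants fakes scored as fake (target 0) -> small loss here

print(round(g_loss, 4))  # 1.6094
print(round(d_loss, 4))  # 0.2231
```

The same fake image produces a large generator loss and a small discriminator loss; each network's gradient step pulls `d_score` in the opposite direction, which is the adversarial process described above.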
Our Implementation¶
In our case, we use one unconditional DCGAN model trained on all 16 classes of the dataset.
- Unconditional means that the generator does not take class labels as input, it only uses random noise to create images.
- As a result, the generated samples are random images from across all classes, without control over which class is produced.
Imports¶
import os

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.layers import (Input, Dense, Reshape, Conv2DTranspose, BatchNormalization,
                                     Activation, Conv2D, LeakyReLU, Dropout, Flatten)
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.optimizers import Adam
Control GPU Memory Usage¶
import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
import tensorflow as tf
print(tf.__version__)
print(tf.config.list_physical_devices('GPU'))
2.10.0 [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
Background Research and Data Exploration¶
Load Dataset¶
df = pd.read_csv('emnist-letters-train.csv')  # default header=0: the CSV's first row becomes the column names (hence the odd headers below)
df.drop_duplicates(inplace=True)
labels = df.iloc[:, 0]            # first column holds the class label
images = df.iloc[:, 1:].values    # remaining 784 columns are the 28×28 pixel values
Data Exploration¶
df.head()
| 24 | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | ... | 0.552 | 0.553 | 0.554 | 0.555 | 0.556 | 0.557 | 0.558 | 0.559 | 0.560 | 0.561 | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | -2 | 142 | 142 | 142 | 142 | 142 | 142 | 142 | 142 | 142 | ... | 142 | 142 | 142 | 142 | 142 | 142 | 142 | 142 | 142 | 142 |
| 1 | 15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 2 | 14 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 | -2 | 120 | 120 | 120 | 120 | 120 | 120 | 120 | 120 | 120 | ... | 120 | 120 | 120 | 120 | 120 | 120 | 120 | 120 | 120 | 120 |
| 4 | -1 | 131 | 131 | 131 | 131 | 131 | 131 | 131 | 131 | 200 | ... | 131 | 131 | 131 | 131 | 131 | 131 | 131 | 131 | 131 | 131 |
5 rows × 785 columns
df.shape
(55363, 785)
df.describe()
| 24 | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | ... | 0.552 | 0.553 | 0.554 | 0.555 | 0.556 | 0.557 | 0.558 | 0.559 | 0.560 | 0.561 | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| count | 55363.000000 | 55363.000000 | 55363.000000 | 55363.000000 | 55363.000000 | 55363.000000 | 55363.000000 | 55363.000000 | 55363.00000 | 55363.000000 | ... | 55363.000000 | 55363.000000 | 55363.000000 | 55363.000000 | 55363.000000 | 55363.000000 | 55363.000000 | 55363.000000 | 55363.000000 | 55363.000000 |
| mean | 11.589003 | 1.787602 | 1.787602 | 1.787602 | 1.787710 | 1.791991 | 1.810921 | 1.819807 | 1.81009 | 2.259542 | ... | 1.787602 | 1.787602 | 1.787602 | 1.792479 | 1.788180 | 1.787602 | 1.787602 | 1.787602 | 1.787602 | 1.787602 |
| std | 7.467979 | 17.352303 | 17.352303 | 17.352303 | 17.368913 | 17.359561 | 17.449014 | 17.500948 | 17.45474 | 19.855490 | ... | 17.352303 | 17.352303 | 17.352303 | 17.383111 | 17.352776 | 17.352303 | 17.352303 | 17.352303 | 17.352303 | 17.352303 |
| min | -2.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.00000 | 0.000000 | ... | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| 25% | 5.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.00000 | 0.000000 | ... | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| 50% | 10.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.00000 | 0.000000 | ... | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| 75% | 16.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.00000 | 0.000000 | ... | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| max | 26.000000 | 255.000000 | 255.000000 | 255.000000 | 255.000000 | 255.000000 | 255.000000 | 255.000000 | 255.00000 | 255.000000 | ... | 255.000000 | 255.000000 | 255.000000 | 255.000000 | 255.000000 | 255.000000 | 255.000000 | 255.000000 | 255.000000 | 255.000000 |
8 rows × 785 columns
print("Unique labels:", sorted(labels.unique()))
Unique labels: [-2, -1, 1, 2, 4, 5, 6, 7, 9, 10, 12, 14, 15, 16, 17, 20, 24, 26]
import matplotlib.pyplot as plt
# Count occurrences and sort by frequency (descending)
label_counts = labels.value_counts().sort_values(ascending=False)
# Create horizontal bar chart
plt.figure(figsize=(10, 6))
bars = plt.barh(label_counts.index.astype(str), label_counts.values, color=plt.cm.tab20.colors)
# Add count values next to bars
for bar, value in zip(bars, label_counts.values):
plt.text(value + 50, bar.get_y() + bar.get_height()/2,
str(value), va='center', fontsize=9)
plt.title("Label Distribution (Descending by Frequency)", fontsize=14, fontweight='bold')
plt.xlabel("Count")
plt.ylabel("Label")
plt.gca().invert_yaxis() # Highest count at the top
plt.grid(axis='x', linestyle='--', alpha=0.7)
plt.tight_layout()
plt.show()
label_counts = labels.value_counts().sort_index()
print(label_counts)
24
-2     256
-1     521
 1    3396
 2    3396
 4    3398
 5    3437
 6    3393
 7    3385
 9    3428
10    3402
12    3415
14    3365
15    3408
16    3430
17    3435
20    3436
24    3436
26    3426
Name: count, dtype: int64
There are two significantly underrepresented labels:
- Label -1: 521 samples
- Label -2: 256 samples
Before we drop them, let us explore what they are first.
Plot sample images by class¶
# Helper function to plot images by class
def plot_class_images(class_label, n=5):
indices = df[df.iloc[:, 0] == class_label].index[:n]
plt.figure(figsize=(10, 2))
for i, idx in enumerate(indices):
img = images[idx].reshape(28, 28)
plt.subplot(1, n, i + 1)
plt.imshow(img, cmap='gray')
plt.title(f"Label: {class_label}")
plt.axis('off')
plt.tight_layout()
plt.show()
plot_class_images(-2)
plot_class_images(-1)
plot_class_images(1)
plot_class_images(2)
plot_class_images(4)
plot_class_images(5)
plot_class_images(6)
- Each label corresponds to a letter in the EMNIST dataset.
- Labels -2 and -1 appear to be mostly blank or minimal marks.
- Other labels (e.g., 1, 2, 4, 5, 6) show handwritten letter variations.
- Variation in handwriting style and stroke thickness may affect GAN training quality.
Filter Out Unwanted Labels¶
df = df[~labels.isin([-1, -2])].copy()
df.reset_index(drop=True, inplace=True)
labels = df.iloc[:, 0]
images = df.iloc[:, 1:].values
print("Unique labels after cleaning:", sorted(labels.unique()))
Unique labels after cleaning: [1, 2, 4, 5, 6, 7, 9, 10, 12, 14, 15, 16, 17, 20, 24, 26]
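As a sanity check on what these numeric labels mean: the EMNIST "letters" split conventionally maps labels 1 to 26 onto the letters a to z. A minimal sketch of decoding the 16 remaining labels under that assumption:

```python
# Hypothetical decoding sketch: assumes EMNIST 'letters' labels map 1..26 -> a..z
label_to_char = {i: chr(ord('a') + i - 1) for i in range(1, 27)}

remaining = [1, 2, 4, 5, 6, 7, 9, 10, 12, 14, 15, 16, 17, 20, 24, 26]
letters = ''.join(label_to_char[lab].upper() for lab in remaining)
print(letters)  # ABDEFGIJLNOPQTXZ
```

These are the 16 letters used for the per-class average images below.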
Image Average¶
import numpy as np
import matplotlib.pyplot as plt
images = images.reshape(-1, 28, 28)
labels = np.asarray(labels).astype(int)
chosen = list("ABDEFGIJLNOPQTXZ") # 16 letters
rows, cols = 2, 8
fig, ax = plt.subplots(rows, cols, figsize=(20, 5))
ax = ax.ravel()
for i, ch in enumerate(chosen):
lab = ord(ch.lower()) - 96 # 'a'->1, ..., 'z'->26
class_imgs = images[labels == lab]
avg_img = np.mean(class_imgs, axis=0) if len(class_imgs) else np.zeros((28, 28))
ax[i].imshow(avg_img, cmap='gray')
ax[i].set_title(ch, fontsize=18, fontweight='bold')
ax[i].axis('off')
# hide any leftover axes (shouldn't be any, but just in case)
for j in range(len(chosen), rows*cols):
ax[j].axis('off')
plt.tight_layout()
plt.show()
- These are the per-letter average images for the 16 target classes, and they appear blurry due to variations in handwriting styles across samples.
- Individual letter features are hard to distinguish in these averages.
Feature Engineering¶
Image Preprocessing Summary¶
To prepare the EMNIST images for DCGAN training, the following preprocessing steps were applied:
1. Reshape: The flat image arrays were reshaped into 28×28 pixel format to match the original image dimensions.
2. Rotate and Flip: The EMNIST dataset stores images in a transposed and inverted format. We corrected this by rotating each image 90 degrees clockwise and flipping it horizontally to restore proper orientation.
3. Expand Dimensions: A channel dimension was added to represent grayscale format, resulting in image shapes of (28, 28, 1), as required for convolutional neural networks.
4. Normalize: Pixel values were scaled from the original [0, 255] range to [-1, 1], which is important for stabilizing GAN training and matching the output activation (tanh) of the generator.
# Normalize pixel values to [-1, 1]
images = (images - 127.5) / 127.5
# Reshape and fix orientation: rotate 90° clockwise then flip horizontally
images = images.reshape(-1, 28, 28)
images = np.array([np.fliplr(np.rot90(img, k=-1)) for img in images])  # k=-1 rotates 90° clockwise; fliplr mirrors horizontally
# Reshape back to (N, 28, 28, 1)
images = images.reshape(-1, 28, 28, 1)
# Confirm shape
print("Final image shape:", images.shape)
Final image shape: (54586, 28, 28, 1)
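As a quick pure-Python check that the scaling above is correct and invertible for display:

```python
# Minimal check of the normalization used above: [0, 255] -> [-1, 1]
pixels = [0.0, 127.5, 255.0]
normalized = [(p - 127.5) / 127.5 for p in pixels]
print(normalized)  # [-1.0, 0.0, 1.0]

# Inverse mapping, used later when displaying generated images: [-1, 1] -> [0, 1]
displayable = [(n + 1) / 2.0 for n in normalized]
print(displayable)  # [0.0, 0.5, 1.0]
```

The generator's tanh output lives in the same [-1, 1] range, so real and generated images are directly comparable.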
# Helper function to plot images by class
def plot_class_images(class_label, n=5):
indices = df[df.iloc[:, 0] == class_label].index[:n]
plt.figure(figsize=(10, 2))
for i, idx in enumerate(indices):
img = images[idx].reshape(28, 28)
plt.subplot(1, n, i + 1)
plt.imshow(img, cmap='gray')
plt.title(f"Label: {class_label}")
plt.axis('off')
plt.tight_layout()
plt.show()
plot_class_images(1)
plot_class_images(2)
plot_class_images(4)
plot_class_images(5)
plot_class_images(6)
Application of DCGAN¶
from tensorflow.keras.layers import Input, Dense, Reshape, Conv2DTranspose, Conv2D, BatchNormalization, Activation
from tensorflow.keras.models import Model, Sequential
Baseline DCGAN Model¶
Unconditional DCGAN Architecture — Design Choices
| Model | Layer / Operation | Output Shape | Why This Choice |
|---|---|---|---|
| Generator | Input | (latent_dim,) | Random noise vector to allow diverse image generation. |
| | Dense | (7×7×128) | Large feature map for more detail capture at start. |
| | Reshape | (7, 7, 128) | Convert dense output into spatial feature maps. |
| | UpSampling2D | (14, 14, 128) | Doubles resolution while preserving learned features. |
| | Conv2D (128) | (14, 14, 128) | Adds spatial detail after upsampling. |
| | BatchNormalization | (14, 14, 128) | Stabilizes activations and improves training convergence. |
| | ReLU | (14, 14, 128) | Encourages positive activations for better feature growth. |
| | UpSampling2D | (28, 28, 128) | Final resolution upsampling to target image size. |
| | Conv2D (64) | (28, 28, 64) | Adds finer detail features. |
| | BatchNormalization | (28, 28, 64) | Keeps generator training stable. |
| | ReLU | (28, 28, 64) | Activation to enhance feature learning. |
| | Conv2D (Output) | (28, 28, channels) | Outputs grayscale image; tanh keeps the range [-1, 1], matching preprocessing. |
| Model | Layer / Operation | Output Shape | Why This Choice |
|---|---|---|---|
| Discriminator | Input | (28, 28, channels) | Takes real or generated images for classification. |
| | Conv2D (64) | (14, 14, 64) | Extracts low-level patterns while reducing resolution. |
| | LeakyReLU(0.2) | (14, 14, 64) | Avoids dead neurons, keeps gradient flow. |
| | Dropout(0.3) | (14, 14, 64) | Regularization to prevent overfitting. |
| | Conv2D (128) | (7, 7, 128) | Learns deeper, mid-level features. |
| | BatchNormalization | (7, 7, 128) | Stabilizes training and improves gradient flow. |
| | LeakyReLU(0.2) | (7, 7, 128) | Allows small negative outputs for stability. |
| | Dropout(0.3) | (7, 7, 128) | Further regularization. |
| | Flatten | (6272,) | Converts feature maps to a vector for classification. |
| | Dense (Output) | (1,) | Sigmoid outputs probability of real/fake for adversarial loss. |
def build_dcgan_generator(latent_dim, channels=1):
noise = tf.keras.Input(shape=(latent_dim,))
x = tf.keras.layers.Dense(7 * 7 * 128, activation='relu')(noise)
x = tf.keras.layers.Reshape((7, 7, 128))(x)
x = tf.keras.layers.UpSampling2D()(x)
x = tf.keras.layers.Conv2D(128, kernel_size=3, padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.ReLU()(x)
x = tf.keras.layers.UpSampling2D()(x)
x = tf.keras.layers.Conv2D(64, kernel_size=3, padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.ReLU()(x)
output_img = tf.keras.layers.Conv2D(channels, kernel_size=3, padding='same', activation='tanh')(x)
return tf.keras.Model(noise, output_img, name='DCGAN_Generator')
def build_dcgan_discriminator(img_shape):
img = tf.keras.Input(shape=img_shape)
x = tf.keras.layers.Conv2D(64, kernel_size=3, strides=2, padding='same')(img)
x = tf.keras.layers.LeakyReLU(0.2)(x)
x = tf.keras.layers.Dropout(0.3)(x)
x = tf.keras.layers.Conv2D(128, kernel_size=3, strides=2, padding='same')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(0.2)(x)
x = tf.keras.layers.Dropout(0.3)(x)
x = tf.keras.layers.Flatten()(x)
output = tf.keras.layers.Dense(1, activation='sigmoid')(x)
return tf.keras.Model(img, output, name='DCGAN_Discriminator')
latent_dim = 150  # a balanced choice
# latent_dim determines how much variety and detail the generator can encode
# before mapping the noise to an image.
# Too small: the generator lacks the capacity to capture complex variations
#            in the data; images may look repetitive or overly simplistic.
# Too large: more parameters to learn, so training can be slower and harder
#            to stabilize; the generator may overfit or produce noisy artifacts.
img_shape = (28, 28, 1)
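To make the capacity trade-off concrete, here is a rough parameter count for just the generator's first Dense layer (7×7×128 = 6272 units) at a few latent sizes; for latent_dim = 150 this matches the 947,072 parameters reported in the generator summary later on:

```python
# Parameter count of a Dense layer: (inputs + 1 bias) * units
units = 7 * 7 * 128  # 6272, the generator's first Dense layer

for latent_dim in (50, 150, 300):
    params = (latent_dim + 1) * units
    print(latent_dim, params)
# 50 -> 319,872 | 150 -> 947,072 | 300 -> 1,887,872
```

So the latent size mostly scales this one layer; 150 roughly triples the capacity of 50 without the parameter cost of 300.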
Data Preparation — Unconditional DCGAN
Purpose: Create a TensorFlow dataset pipeline for training the DCGAN on all images (no labels used).
Steps:
1. `tf.data.Dataset.from_tensor_slices(images)` loads the image array into a TensorFlow dataset.
2. `.shuffle(buffer_size=10000)` randomizes the order of images to prevent the model from memorizing sequence patterns.
3. `.batch(batch_size=64)` groups images into mini-batches of 64 for efficient GPU processing.
4. `.prefetch(tf.data.AUTOTUNE)` overlaps data loading with training to improve performance.
Key Parameters:
- `batch_size = 64`: balances training stability and memory usage.
- `buffer_size = 10000`: ensures strong shuffling for better generalization.
# ==== Data Preparation ====
import tensorflow as tf
batch_size = 64
dataset = tf.data.Dataset.from_tensor_slices(images) \
.shuffle(buffer_size=10000) \
.batch(batch_size) \
.prefetch(tf.data.AUTOTUNE)
DCGANModel Class — Unconditional DCGAN
Purpose:
A custom `tf.keras.Model` subclass that encapsulates the Generator and Discriminator in a single class for adversarial training.
Key Components:
- `generator`, `discriminator`: the two neural networks of the DCGAN.
- `latent_dim`: size of the noise vector fed into the generator.
- `loss_fn`: binary cross-entropy (with `from_logits=False`, since the discriminator ends in a `sigmoid`).
- `d_accuracy_metric`: tracks discriminator accuracy across batches.
Loss Functions:
- Generator loss: `generator_loss(fake_output)` compares D's predictions on fakes against the target `1` (the generator wants fakes classified as real).
- Discriminator loss: `discriminator_loss(real_output, fake_output)` uses target `1` for real images and target `0` for fakes (no label smoothing is applied).
Training Step (`train_step`):
1. Generate noise and feed it into G to create fake images.
2. Forward pass through D with real and fake images.
3. Compute losses: `d_loss` for D, `g_loss` for G.
4. Backpropagate and update: D with `d_loss`, G with `g_loss`.
5. Accuracy calculation:
   - Real accuracy: % of real images classified as real.
   - Fake accuracy: % of fake images classified as fake.
   - Final `d_accuracy` is the average of real and fake accuracies.
Return Values:
- `d_loss`: discriminator loss for the batch.
- `g_loss`: generator loss for the batch.
- `d_accuracy`: discriminator accuracy for the batch.
# ==== DCGAN Class ====
class DCGANModel(tf.keras.Model):
def __init__(self, generator, discriminator, latent_dim):
super().__init__()
self.generator = generator # The generator network
self.discriminator = discriminator # The discriminator network
self.latent_dim = latent_dim # Size of the random noise vector
self.loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=False) # Binary cross-entropy loss
self.d_accuracy_metric = tf.keras.metrics.Mean(name="d_accuracy") # Metric to track discriminator accuracy
def compile(self, g_optimizer, d_optimizer):
super().compile()
self.g_optimizer = g_optimizer # Optimizer for generator
self.d_optimizer = d_optimizer # Optimizer for discriminator
# Loss for generator: wants fake images to be classified as real (label = 1)
def generator_loss(self, fake_output):
return self.loss_fn(tf.ones_like(fake_output), fake_output)
# Loss for discriminator: real images → label = 1, fake images → label = 0
def discriminator_loss(self, real_output, fake_output):
real_loss = self.loss_fn(tf.ones_like(real_output), real_output) # No label smoothing
fake_loss = self.loss_fn(tf.zeros_like(fake_output), fake_output)
return real_loss + fake_loss
# One training step for both networks
def train_step(self, real_images):
batch_size = tf.shape(real_images)[0] # Get current batch size
noise = tf.random.normal([batch_size, self.latent_dim]) # Random noise for generator input
with tf.GradientTape() as disc_tape, tf.GradientTape() as gen_tape:
# Generate fake images
generated_images = self.generator(noise, training=True)
# Discriminator predictions on real and fake images
real_output = self.discriminator(real_images, training=True)
fake_output = self.discriminator(generated_images, training=True)
# Calculate losses
d_loss = self.discriminator_loss(real_output, fake_output)
g_loss = self.generator_loss(fake_output)
# Compute gradients for both networks
grads_d = disc_tape.gradient(d_loss, self.discriminator.trainable_variables)
grads_g = gen_tape.gradient(g_loss, self.generator.trainable_variables)
# Apply gradients (update weights)
self.d_optimizer.apply_gradients(zip(grads_d, self.discriminator.trainable_variables))
self.g_optimizer.apply_gradients(zip(grads_g, self.generator.trainable_variables))
# Calculate discriminator accuracy
real_accuracy = tf.reduce_mean(tf.cast(real_output > 0.5, tf.float32))
fake_accuracy = tf.reduce_mean(tf.cast(fake_output < 0.5, tf.float32))
d_accuracy = 0.5 * (real_accuracy + fake_accuracy)
# Return metrics for monitoring
return {
"d_loss": d_loss, # Discriminator loss
"g_loss": g_loss, # Generator loss
"d_accuracy": d_accuracy # Discriminator accuracy
}
import numpy as np
history = {
"d_loss": [],
"g_loss": [],
"d_accuracy": []
}
Model Initialization and Compilation¶
Once the Generator and Discriminator architectures are defined, the next step is to:
1. Build each network.
2. Combine them into our custom `DCGANModel` class.
3. Compile the DCGAN with separate optimizers for each part.
Build the Generator and Discriminator¶
- Generator: takes a random noise vector of size `latent_dim` and produces a synthetic image.
- Discriminator: takes an image of shape `img_shape` and predicts whether it is real or fake.
generator = build_dcgan_generator(latent_dim)
discriminator = build_dcgan_discriminator(img_shape)
Building and Compiling the Unconditional DCGAN
Build Models:
- `generator = build_dcgan_generator(latent_dim)` creates the generator network to produce 28×28 grayscale images from random noise of size `latent_dim`.
- `discriminator = build_dcgan_discriminator(img_shape)` creates the discriminator network to classify images (real or fake) with input shape `img_shape`.
Create DCGAN Model:
- `dcgan = DCGANModel(generator, discriminator, latent_dim)` wraps both networks into a single training class that handles adversarial training.
Compile Model:
- Optimizers: `Adam` with `learning_rate=0.0002` and `beta_1=0.5` for both G and D, standard DCGAN settings for stable training.
- `g_optimizer`: updates the generator to better fool the discriminator.
- `d_optimizer`: updates the discriminator to better distinguish real from fake images.
# === Build generator and discriminator ===
generator = build_dcgan_generator(latent_dim)
discriminator = build_dcgan_discriminator(img_shape)
# === Create DCGAN model ===
dcgan = DCGANModel(generator, discriminator, latent_dim)
# === Compile the model ===
dcgan.compile(
g_optimizer=tf.keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5),
d_optimizer=tf.keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5)
)
# Print the architecture summaries
print("=== Generator Architecture ===")
generator.summary()
print("\n=== Discriminator Architecture ===")
discriminator.summary()
=== Generator Architecture ===
Model: "DCGAN_Generator"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) [(None, 150)] 0
dense_2 (Dense) (None, 6272) 947072
reshape_1 (Reshape) (None, 7, 7, 128) 0
up_sampling2d_2 (UpSampling (None, 14, 14, 128) 0
2D)
conv2d_5 (Conv2D) (None, 14, 14, 128) 147584
batch_normalization_3 (Batc (None, 14, 14, 128) 512
hNormalization)
re_lu_2 (ReLU) (None, 14, 14, 128) 0
up_sampling2d_3 (UpSampling (None, 28, 28, 128) 0
2D)
conv2d_6 (Conv2D) (None, 28, 28, 64) 73792
batch_normalization_4 (Batc (None, 28, 28, 64) 256
hNormalization)
re_lu_3 (ReLU) (None, 28, 28, 64) 0
conv2d_7 (Conv2D) (None, 28, 28, 1) 577
=================================================================
Total params: 1,169,793
Trainable params: 1,169,409
Non-trainable params: 384
_________________________________________________________________
=== Discriminator Architecture ===
Model: "DCGAN_Discriminator"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_4 (InputLayer) [(None, 28, 28, 1)] 0
conv2d_8 (Conv2D) (None, 14, 14, 64) 640
leaky_re_lu_2 (LeakyReLU) (None, 14, 14, 64) 0
dropout_2 (Dropout) (None, 14, 14, 64) 0
conv2d_9 (Conv2D) (None, 7, 7, 128) 73856
batch_normalization_5 (Batc (None, 7, 7, 128) 512
hNormalization)
leaky_re_lu_3 (LeakyReLU) (None, 7, 7, 128) 0
dropout_3 (Dropout) (None, 7, 7, 128) 0
flatten_1 (Flatten) (None, 6272) 0
dense_3 (Dense) (None, 1) 6273
=================================================================
Total params: 81,281
Trainable params: 81,025
Non-trainable params: 256
_________________________________________________________________
import matplotlib.pyplot as plt
import numpy as np
def show_160_images(generator, latent_dim=150):
"""
Generates and displays 160 images from a trained DCGAN generator
in a 16×10 grid layout.
"""
# === 1. Generate random noise for the generator ===
# Shape: [160 samples, latent_dim size]
noise = tf.random.normal([160, latent_dim])
# === 2. Generate fake images from noise ===
# training=False → ensures layers like BatchNorm work in inference mode
generated_images = generator(noise, training=False)
# === 3. Normalize from [-1, 1] → [0, 1] for display ===
generated_images = (generated_images + 1) / 2.0
# === 4. Create a matplotlib figure for the grid ===
fig = plt.figure(figsize=(16, 10)) # width=16, height=10 inches
# === 5. Loop over all 160 generated images ===
for i in range(160):
# Add a subplot in a 10-row × 16-column grid
plt.subplot(10, 16, i + 1)
# Display the image (take channel 0 since grayscale)
plt.imshow(generated_images[i, :, :, 0], cmap='gray')
# Remove axes ticks and labels for a cleaner look
plt.axis('off')
# === 6. Add a title above the grid ===
plt.suptitle("160 Generated Images", fontsize=16)
# === 7. Adjust spacing to fit images nicely ===
plt.tight_layout()
# === 8. Show the figure ===
plt.show()
# === 9. Close the figure to free memory ===
plt.close(fig)
EPOCHS = 50 # Total number of training epochs
for epoch in range(EPOCHS):
print(f"\nEpoch {epoch+1}/{EPOCHS}")
# Lists to store metrics for each batch in this epoch
d_losses, g_losses, d_accuracies = [], [], []
# === Iterate over all batches in the dataset ===
for batch in dataset:
# Train once on the current batch
metrics = dcgan.train_step(batch)
# Store batch metrics (convert from tensors to NumPy values)
d_losses.append(metrics["d_loss"].numpy()) # Discriminator loss
g_losses.append(metrics["g_loss"].numpy()) # Generator loss
d_accuracies.append(metrics["d_accuracy"].numpy()) # Discriminator accuracy
# === Compute average metrics for the whole epoch ===
avg_d_loss = np.mean(d_losses)
avg_g_loss = np.mean(g_losses)
avg_d_accuracy = np.mean(d_accuracies)
# Store averages in the training history dictionary
history["d_loss"].append(avg_d_loss)
history["g_loss"].append(avg_g_loss)
history["d_accuracy"].append(avg_d_accuracy)
# Print the epoch’s average metrics
print(f"D Loss: {avg_d_loss:.4f} | G Loss: {avg_g_loss:.4f} | D Acc: {avg_d_accuracy:.4f}")
# === Show generated samples every 10 epochs ===
if (epoch + 1) % 10 == 0:
show_160_images(dcgan.generator, latent_dim=latent_dim)
Epoch 1/50
D Loss: 1.3095 | G Loss: 1.0274 | D Acc: 0.6242
Epoch 2/50
D Loss: 1.2657 | G Loss: 1.0378 | D Acc: 0.6455
Epoch 3/50
D Loss: 1.2772 | G Loss: 1.0084 | D Acc: 0.6354
Epoch 4/50
D Loss: 1.3048 | G Loss: 0.9728 | D Acc: 0.6141
Epoch 5/50
D Loss: 1.3389 | G Loss: 0.9281 | D Acc: 0.5864
Epoch 6/50
D Loss: 1.3459 | G Loss: 0.9055 | D Acc: 0.5772
Epoch 7/50
D Loss: 1.3512 | G Loss: 0.8908 | D Acc: 0.5700
Epoch 8/50
D Loss: 1.3528 | G Loss: 0.8831 | D Acc: 0.5677
Epoch 9/50
D Loss: 1.3539 | G Loss: 0.8751 | D Acc: 0.5658
Epoch 10/50
D Loss: 1.3532 | G Loss: 0.8707 | D Acc: 0.5650
Epoch 11/50
D Loss: 1.3525 | G Loss: 0.8703 | D Acc: 0.5653
Epoch 12/50
D Loss: 1.3495 | G Loss: 0.8715 | D Acc: 0.5683
Epoch 13/50
D Loss: 1.3532 | G Loss: 0.8676 | D Acc: 0.5650
Epoch 14/50
D Loss: 1.3536 | G Loss: 0.8661 | D Acc: 0.5649
Epoch 15/50
D Loss: 1.3540 | G Loss: 0.8631 | D Acc: 0.5627
Epoch 16/50
D Loss: 1.3530 | G Loss: 0.8631 | D Acc: 0.5641
Epoch 17/50
D Loss: 1.3549 | G Loss: 0.8611 | D Acc: 0.5610
Epoch 18/50
D Loss: 1.3535 | G Loss: 0.8609 | D Acc: 0.5619
Epoch 19/50
D Loss: 1.3523 | G Loss: 0.8617 | D Acc: 0.5637
Epoch 20/50
D Loss: 1.3528 | G Loss: 0.8627 | D Acc: 0.5628
Epoch 21/50
D Loss: 1.3508 | G Loss: 0.8632 | D Acc: 0.5650
Epoch 22/50
D Loss: 1.3519 | G Loss: 0.8626 | D Acc: 0.5649
Epoch 23/50
D Loss: 1.3529 | G Loss: 0.8611 | D Acc: 0.5640
Epoch 24/50
D Loss: 1.3533 | G Loss: 0.8602 | D Acc: 0.5609
Epoch 25/50
D Loss: 1.3520 | G Loss: 0.8618 | D Acc: 0.5650
Epoch 26/50
D Loss: 1.3513 | G Loss: 0.8610 | D Acc: 0.5648
Epoch 27/50
D Loss: 1.3505 | G Loss: 0.8616 | D Acc: 0.5642
Epoch 28/50
D Loss: 1.3513 | G Loss: 0.8617 | D Acc: 0.5657
Epoch 29/50
D Loss: 1.3503 | G Loss: 0.8629 | D Acc: 0.5672
Epoch 30/50
D Loss: 1.3510 | G Loss: 0.8630 | D Acc: 0.5644
Epoch 31/50
D Loss: 1.3507 | G Loss: 0.8630 | D Acc: 0.5638
Epoch 32/50
D Loss: 1.3506 | G Loss: 0.8622 | D Acc: 0.5631
Epoch 33/50
D Loss: 1.3487 | G Loss: 0.8636 | D Acc: 0.5678
Epoch 34/50
D Loss: 1.3493 | G Loss: 0.8635 | D Acc: 0.5635
Epoch 35/50
D Loss: 1.3480 | G Loss: 0.8644 | D Acc: 0.5677
Epoch 36/50
D Loss: 1.3485 | G Loss: 0.8648 | D Acc: 0.5670
Epoch 37/50
D Loss: 1.3490 | G Loss: 0.8638 | D Acc: 0.5673
Epoch 38/50
D Loss: 1.3480 | G Loss: 0.8656 | D Acc: 0.5682
Epoch 39/50
D Loss: 1.3484 | G Loss: 0.8646 | D Acc: 0.5661
Epoch 40/50
D Loss: 1.3489 | G Loss: 0.8655 | D Acc: 0.5682
Epoch 41/50
D Loss: 1.3466 | G Loss: 0.8666 | D Acc: 0.5693
Epoch 42/50
D Loss: 1.3468 | G Loss: 0.8682 | D Acc: 0.5711
Epoch 43/50
D Loss: 1.3469 | G Loss: 0.8669 | D Acc: 0.5663
Epoch 44/50
D Loss: 1.3481 | G Loss: 0.8663 | D Acc: 0.5680
Epoch 45/50
D Loss: 1.3451 | G Loss: 0.8688 | D Acc: 0.5714
Epoch 46/50
D Loss: 1.3456 | G Loss: 0.8687 | D Acc: 0.5693
Epoch 47/50
D Loss: 1.3460 | G Loss: 0.8689 | D Acc: 0.5697
Epoch 48/50
D Loss: 1.3462 | G Loss: 0.8688 | D Acc: 0.5719
Epoch 49/50
D Loss: 1.3466 | G Loss: 0.8680 | D Acc: 0.5691
Epoch 50/50
D Loss: 1.3467 | G Loss: 0.8689 | D Acc: 0.5693
import matplotlib.pyplot as plt
import numpy as np
epochs_range = np.arange(1, len(history["d_loss"])+1)
# --- Loss curves ---
plt.figure(figsize=(8,4))
plt.plot(epochs_range, history["d_loss"], label="Discriminator Loss")
plt.plot(epochs_range, history["g_loss"], label="Generator Loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.title("GAN Training Loss")
plt.legend()
plt.grid(True, linestyle="--", alpha=0.4)
plt.tight_layout()
plt.show()
# --- Discriminator accuracy ---
plt.figure(figsize=(8,4))
plt.plot(epochs_range, history["d_accuracy"], label="Discriminator Accuracy")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.title("Discriminator Accuracy Over Epochs")
plt.ylim(0, 1) # since it's an accuracy
plt.grid(True, linestyle="--", alpha=0.4)
plt.tight_layout()
plt.show()
import os
import matplotlib.pyplot as plt
import tensorflow as tf
def show_160_images(generator, latent_dim=150, save_path="DCGAN(1)generated_images/generated_160_grid.png"):  # default latent_dim matches the trained model
# Ensure the save directory exists
os.makedirs(os.path.dirname(save_path), exist_ok=True)
# Generate 160 noise vectors
noise = tf.random.normal([160, latent_dim]) # Noise = diversity
# Generate fake images from noise
generated_images = generator(noise, training=False)
# Normalize generated images from [-1, 1] to [0, 1] for display
generated_images = (generated_images + 1.0) / 2.0
# Create a 16x10 grid of images
fig = plt.figure(figsize=(16, 10))
for i in range(160):
plt.subplot(10, 16, i + 1)
plt.imshow(generated_images[i, :, :, 0], cmap='gray')
plt.axis('off')
plt.suptitle("Generated 160 Images", fontsize=16)
plt.tight_layout()
plt.savefig(save_path)
plt.show()
plt.close(fig)
show_160_images(generator, latent_dim=150)
DCGAN Model Improvement - 500 epochs¶
Model Improvement: From Baseline DCGAN to Enhanced DCGAN¶
We upgraded both the Generator and Discriminator to improve sample quality and training stability.
What Changed in the Generator¶
Before (baseline):
- Dense → reshape to 7×7×128
- 2× `UpSampling2D` + `Conv2D` stacks (128 → 64)
- Output: `Conv2D(..., activation='tanh')`
Now (improved):
- Dense → reshape to 7×7×256 (more capacity)
- Two `Conv2DTranspose` (stride=2) upsampling blocks: 256 → 128 channels, reaching 28×28
- Extra refinement block (no upsampling): `Conv2DTranspose(64, stride=1)` to add detail at full resolution
- `BatchNormalization(momentum=0.8)` after each upsampling
- Output: `Conv2D(...)` with tanh
Why this helps
- Transposed conv replaces `UpSampling2D` + `Conv2D`: it learns the upsampling kernel, giving better textures and fewer checkerboard artifacts when kernels/strides are set consistently.
- Higher channel width (256) at low resolution gives the model more representational power.
- Refinement at 28×28 lets the network sharpen edges and fill fine details after reaching the target size.
- BatchNorm(momentum=0.8) stabilizes training and speeds convergence in GANs.
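The shape arithmetic behind the two stride-2 upsampling blocks can be sketched in plain Python (for `padding='same'`, a Keras `Conv2DTranspose` outputs input_size × stride):

```python
# Output height/width of Conv2DTranspose with padding='same': size * stride
def deconv_out(size, stride):
    return size * stride

size = 7                    # after Dense -> Reshape((7, 7, 256))
size = deconv_out(size, 2)  # first stride-2 block  -> 14
size = deconv_out(size, 2)  # second stride-2 block -> 28
size = deconv_out(size, 1)  # stride-1 refinement block keeps 28
print(size)  # 28
```

Two stride-2 blocks therefore take the 7×7 feature maps exactly to the 28×28 target, and the stride-1 block refines without resizing.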
What Changed in the Discriminator¶
Before (baseline):
- Two downsampling blocks: `Conv2D(64, s=2)` → `Conv2D(128, s=2)`
- BatchNorm only in the second block
- Dropout(0.3)
- Flatten → Dense(1, sigmoid)
Now (improved):
- Same first two blocks, but:
  - BatchNorm added consistently
  - Dropout increased to 0.4 for stronger regularization
- Added third conv block (stride=1, keeping the 7×7 feature map): Conv2D(256, stride=1) + BatchNorm + LeakyReLU + Dropout
- Flatten → Dense(1, sigmoid)
Why this helps
- The extra conv block (stride=1) adds capacity to inspect local details at the current 7×7 resolution, making the discriminator more discerning without further reducing spatial size.
- Stronger dropout reduces overfitting and prevents the discriminator from overpowering the generator early in training.
- Consistent BatchNorm improves gradient flow and training stability.
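A matching sketch for the discriminator side (an illustrative check, not part of the notebook): with `padding="same"`, a strided `Conv2D` produces ceil(input / stride), so the 28×28 input shrinks to 7×7 before the stride-1 block and `Flatten`:

```python
import math

# Output height/width of a Conv2D with padding="same": out = ceil(in / stride).
def conv_out(size: int, stride: int) -> int:
    return math.ceil(size / stride)

size = 28
for stride in (2, 2, 1):  # two downsampling blocks, then the stride-1 block
    size = conv_out(size, stride)
flat_units = size * size * 256  # channels after the third block
print(size, flat_units)  # 7 12544: what Flatten feeds into Dense(1)
```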
def build_generator(latent_dim, channels=1):
    model = Sequential()
    # Project and reshape the latent vector to a 7x7x256 feature map
    model.add(Dense(256 * 7 * 7, activation="relu", input_dim=latent_dim))
    model.add(Reshape((7, 7, 256)))
    # Upsample 7x7 -> 14x14
    model.add(Conv2DTranspose(256, kernel_size=3, strides=2, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))
    # Upsample 14x14 -> 28x28
    model.add(Conv2DTranspose(128, kernel_size=3, strides=2, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))
    # Refinement block at full resolution (stride 1, no upsampling)
    model.add(Conv2DTranspose(64, kernel_size=3, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))
    # Output image in [-1, 1]
    model.add(Conv2D(channels, kernel_size=3, padding="same"))
    model.add(Activation("tanh"))

    noise = Input(shape=(latent_dim,))
    img = model(noise)
    return Model(noise, img)
def build_discriminator(img_shape):
    model = Sequential()
    # Downsample 28x28 -> 14x14
    model.add(Conv2D(64, kernel_size=3, strides=2, padding="same", input_shape=img_shape))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.4))
    # Downsample 14x14 -> 7x7
    model.add(Conv2D(128, kernel_size=3, strides=2, padding="same"))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.4))
    # Extra stride-1 block: more capacity without further downsampling
    model.add(Conv2D(256, kernel_size=3, strides=1, padding="same"))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.4))
    model.add(Flatten())
    model.add(Dense(1, activation='sigmoid'))

    img = Input(shape=img_shape)
    validity = model(img)
    return Model(img, validity)
# ==== Data Preparation ====
import tensorflow as tf

batch_size = 64
buffer_size = 10000

# DCGAN Dataset (images only)
dcgan_dataset = (tf.data.Dataset.from_tensor_slices(images)
                 .shuffle(buffer_size, reshuffle_each_iteration=True)
                 .batch(batch_size, drop_remainder=True)
                 .prefetch(tf.data.AUTOTUNE))
class DCGANModel(tf.keras.Model):
    def __init__(self, generator, discriminator, latent_dim):
        super().__init__()
        self.generator = generator
        self.discriminator = discriminator
        self.latent_dim = latent_dim
        self.loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=False)

    def compile(self, g_optimizer, d_optimizer):
        super().compile()
        self.g_optimizer = g_optimizer
        self.d_optimizer = d_optimizer

    def generator_loss(self, fake_output):
        return self.loss_fn(tf.ones_like(fake_output), fake_output)

    def discriminator_loss(self, real_output, fake_output):
        real_loss = self.loss_fn(tf.ones_like(real_output) * 0.9, real_output)  # one-sided label smoothing
        fake_loss = self.loss_fn(tf.zeros_like(fake_output), fake_output)
        return real_loss + fake_loss

    @tf.function
    def train_step(self, real_images):
        batch_size = tf.shape(real_images)[0]
        noise = tf.random.normal([batch_size, self.latent_dim])
        with tf.GradientTape() as disc_tape, tf.GradientTape() as gen_tape:
            generated_images = self.generator(noise, training=True)
            real_output = self.discriminator(real_images, training=True)
            fake_output = self.discriminator(generated_images, training=True)
            d_loss = self.discriminator_loss(real_output, fake_output)
            g_loss = self.generator_loss(fake_output)
        grads_d = disc_tape.gradient(d_loss, self.discriminator.trainable_variables)
        grads_g = gen_tape.gradient(g_loss, self.generator.trainable_variables)
        self.d_optimizer.apply_gradients(zip(grads_d, self.discriminator.trainable_variables))
        self.g_optimizer.apply_gradients(zip(grads_g, self.generator.trainable_variables))

        # === Discriminator Accuracy ===
        real_accuracy = tf.reduce_mean(tf.cast(real_output > 0.5, tf.float32))
        fake_accuracy = tf.reduce_mean(tf.cast(fake_output < 0.5, tf.float32))
        d_accuracy = 0.5 * (real_accuracy + fake_accuracy)
        return {"d_loss": d_loss, "g_loss": g_loss, "d_accuracy": d_accuracy}

    def save_all_weights(self, prefix="dcgan"):
        self.generator.save_weights(f"{prefix}_generator_500.h5")
        self.discriminator.save_weights(f"{prefix}_discriminator.h5")
        self.save_weights(f"{prefix}_combined.h5")
        print("All weights saved.")

    def load_all_weights(self, prefix="dcgan"):
        self.generator.load_weights(f"{prefix}_generator_500.h5")
        self.discriminator.load_weights(f"{prefix}_discriminator.h5")
        self.load_weights(f"{prefix}_combined.h5")
        print("All weights loaded.")
# ==== Instantiate Models ====
from tensorflow.keras.layers import (
Conv2D, Dense, Flatten, Dropout, LeakyReLU, BatchNormalization,
Reshape, Conv2DTranspose, Input, Activation
)
latent_dim = 128
generator = build_generator(latent_dim, channels=1)
discriminator = build_discriminator(img_shape=(28, 28, 1))
# Confirm model architecture
generator.summary()
discriminator.summary()
dcgan = DCGANModel(generator, discriminator, latent_dim)
dcgan.compile(
    g_optimizer=tf.keras.optimizers.Adam(0.0002, beta_1=0.5),
    d_optimizer=tf.keras.optimizers.Adam(0.0002, beta_1=0.5)
)
Model: "model_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_5 (InputLayer) [(None, 128)] 0
sequential_4 (Sequential) (None, 28, 28, 1) 2579457
=================================================================
Total params: 2,579,457
Trainable params: 2,578,561
Non-trainable params: 896
_________________________________________________________________
Model: "model_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_6 (InputLayer) [(None, 28, 28, 1)] 0
sequential_5 (Sequential) (None, 1) 383745
=================================================================
Total params: 383,745
Trainable params: 382,977
Non-trainable params: 768
_________________________________________________________________
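The summary totals above can be reproduced by hand. A small sketch (layer shapes taken from `build_generator`/`build_discriminator`; the 4-per-channel BatchNorm count covers gamma, beta, and the two non-trainable moving statistics):

```python
def dense_params(n_in, n_out):
    return n_in * n_out + n_out          # weights + biases

def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out + c_out  # kernel weights + biases (Conv2D or Conv2DTranspose)

def bn_params(c):
    return 4 * c                         # gamma, beta, moving mean, moving variance

latent_dim = 128
gen_total = (dense_params(latent_dim, 256 * 7 * 7)
             + conv_params(3, 256, 256) + bn_params(256)
             + conv_params(3, 256, 128) + bn_params(128)
             + conv_params(3, 128, 64) + bn_params(64)
             + conv_params(3, 64, 1))
disc_total = (conv_params(3, 1, 64)
              + conv_params(3, 64, 128) + bn_params(128)
              + conv_params(3, 128, 256) + bn_params(256)
              + dense_params(7 * 7 * 256, 1))
print(gen_total, disc_total)  # 2579457 383745, matching the summaries
```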
# ==== Callback for Monitoring ====
import os
import matplotlib.pyplot as plt
import tensorflow as tf

class GANMonitor(tf.keras.callbacks.Callback):
    def __init__(self, num_images=25, latent_dim=128, save_dir="DCGAN_500_epochs_generated_images", save_interval=50):
        super().__init__()
        self.num_images = num_images
        self.latent_dim = latent_dim
        self.save_dir = save_dir
        self.save_interval = save_interval
        self.seed = tf.random.normal([num_images, latent_dim])  # fixed noise so snapshots are comparable across epochs
        os.makedirs(self.save_dir, exist_ok=True)

    def on_epoch_end(self, epoch, logs=None):
        # Save images only every `save_interval` epochs
        if (epoch + 1) % self.save_interval == 0:
            generated_images = self.model.generator(self.seed, training=False)
            generated_images = (generated_images + 1) / 2.0  # Rescale to [0, 1]
            fig = plt.figure(figsize=(5, 5))
            for i in range(self.num_images):
                plt.subplot(5, 5, i + 1)
                plt.imshow(generated_images[i, :, :, 0], cmap='gray')
                plt.axis('off')
            plt.suptitle(f"Epoch {epoch + 1}")
            save_path = os.path.join(self.save_dir, f"generated_epoch_{epoch + 1}.png")
            plt.savefig(save_path)
            plt.close(fig)
def generate_and_save_all(generator, latent_dim=128, num_classes=16, images_per_class=10,
                          save_dir="DCGAN(1)_generated_images", grid_path="DCGAN(4)generated_160_grid.png"):
    os.makedirs(save_dir, exist_ok=True)
    all_images = []
    for i in range(num_classes):
        noise = np.random.normal(0, 1, (images_per_class, latent_dim))
        generated_images = generator(noise, training=False)
        generated_images = 0.5 * generated_images + 0.5  # Rescale to [0, 1]
        # Optional: save individual images
        class_dir = os.path.join(save_dir, f"group_{i}")
        os.makedirs(class_dir, exist_ok=True)
        for j in range(images_per_class):
            img = generated_images[j, :, :, 0]
            plt.imsave(os.path.join(class_dir, f"img_{j+1}.png"), img, cmap='gray')
        all_images.extend(generated_images[:, :, :, 0])
    # Save the 160-image grid
    plt.figure(figsize=(20, 32))
    for i, img in enumerate(all_images):
        plt.subplot(num_classes, images_per_class, i + 1)
        plt.imshow(img, cmap='gray')
        plt.axis('off')
    plt.tight_layout()
    plt.savefig(grid_path)
    plt.show()
import tensorflow as tf

epochs = 500
g_losses = []
d_losses = []
d_accuracies = []
monitor = GANMonitor(num_images=25, latent_dim=latent_dim, save_dir="DCGAN_500_epochs_generated_images", save_interval=50)
monitor.set_model(dcgan)  # attach the model the callback samples from

# ========== Training Loop ==========
for epoch in range(epochs):
    for real_images in dcgan_dataset:
        logs = dcgan.train_step(real_images)
    # Record the metrics from the last batch of the epoch
    g_losses.append(logs["g_loss"].numpy())
    d_losses.append(logs["d_loss"].numpy())
    d_accuracies.append(logs["d_accuracy"].numpy())
    monitor.on_epoch_end(epoch)
    print(f"Epoch {epoch + 1}/{epochs} - G Loss: {g_losses[-1]:.4f}, D Loss: {d_losses[-1]:.4f}, D Accuracy: {d_accuracies[-1]:.4f}")

# Save final generator weights
generator.save_weights("dcgan_generator_500_weights.h5")
Epoch 1/500 - G Loss: 1.0601, D Loss: 1.2520, D Accuracy: 0.6724
Epoch 2/500 - G Loss: 1.1210, D Loss: 1.2584, D Accuracy: 0.6552
Epoch 3/500 - G Loss: 1.0861, D Loss: 1.2190, D Accuracy: 0.6724
...
Epoch 417/500 - G Loss: 1.2532, D Loss: 0.8160, D Accuracy: 0.9052
Epoch 418/500 - G Loss: 1.8389, D Loss: 0.8342, D Accuracy: 0.8621
(per-epoch log abridged; the captured output cuts off at epoch 419)
Accuracy: 0.9224 Epoch 420/500 - G Loss: 2.5567, D Loss: 0.9105, D Accuracy: 0.7069 Epoch 421/500 - G Loss: 1.0270, D Loss: 1.0472, D Accuracy: 0.8103 Epoch 422/500 - G Loss: 1.3948, D Loss: 0.8621, D Accuracy: 0.8879 Epoch 423/500 - G Loss: 2.0609, D Loss: 0.8851, D Accuracy: 0.7845 Epoch 424/500 - G Loss: 1.0314, D Loss: 1.0009, D Accuracy: 0.8190 Epoch 425/500 - G Loss: 2.3613, D Loss: 0.6725, D Accuracy: 0.9310 Epoch 426/500 - G Loss: 1.8086, D Loss: 0.7742, D Accuracy: 0.8966 Epoch 427/500 - G Loss: 1.3833, D Loss: 0.7675, D Accuracy: 0.9224 Epoch 428/500 - G Loss: 1.6622, D Loss: 1.3257, D Accuracy: 0.6034 Epoch 429/500 - G Loss: 1.4599, D Loss: 0.8300, D Accuracy: 0.9052 Epoch 430/500 - G Loss: 2.9726, D Loss: 0.8269, D Accuracy: 0.7759 Epoch 431/500 - G Loss: 1.7455, D Loss: 1.5467, D Accuracy: 0.5603 Epoch 432/500 - G Loss: 1.5106, D Loss: 0.7720, D Accuracy: 0.9052 Epoch 433/500 - G Loss: 1.3774, D Loss: 0.9720, D Accuracy: 0.8448 Epoch 434/500 - G Loss: 1.2383, D Loss: 1.0697, D Accuracy: 0.7500 Epoch 435/500 - G Loss: 1.5411, D Loss: 0.7677, D Accuracy: 0.9224 Epoch 436/500 - G Loss: 1.0858, D Loss: 0.9008, D Accuracy: 0.9224 Epoch 437/500 - G Loss: 1.7207, D Loss: 0.6543, D Accuracy: 0.9914 Epoch 438/500 - G Loss: 1.5959, D Loss: 1.2865, D Accuracy: 0.6121 Epoch 439/500 - G Loss: 1.5933, D Loss: 1.3532, D Accuracy: 0.6034 Epoch 440/500 - G Loss: 0.8522, D Loss: 1.2311, D Accuracy: 0.6293 Epoch 441/500 - G Loss: 1.0361, D Loss: 0.9804, D Accuracy: 0.8276 Epoch 442/500 - G Loss: 1.7734, D Loss: 1.1178, D Accuracy: 0.6207 Epoch 443/500 - G Loss: 1.9823, D Loss: 0.6509, D Accuracy: 0.9569 Epoch 444/500 - G Loss: 1.0427, D Loss: 0.9837, D Accuracy: 0.8190 Epoch 445/500 - G Loss: 3.3184, D Loss: 0.9600, D Accuracy: 0.7328 Epoch 446/500 - G Loss: 0.8075, D Loss: 1.8211, D Accuracy: 0.3793 Epoch 447/500 - G Loss: 0.9789, D Loss: 1.0713, D Accuracy: 0.7845 Epoch 448/500 - G Loss: 2.3015, D Loss: 0.9493, D Accuracy: 0.7241 Epoch 449/500 - G Loss: 1.1842, D Loss: 
1.3155, D Accuracy: 0.5862 Epoch 450/500 - G Loss: 1.0738, D Loss: 1.2200, D Accuracy: 0.7155 Epoch 451/500 - G Loss: 1.4788, D Loss: 1.2873, D Accuracy: 0.5862 Epoch 452/500 - G Loss: 1.3375, D Loss: 0.9092, D Accuracy: 0.8448 Epoch 453/500 - G Loss: 2.9443, D Loss: 0.7282, D Accuracy: 0.8448 Epoch 454/500 - G Loss: 2.2525, D Loss: 0.5459, D Accuracy: 0.9828 Epoch 455/500 - G Loss: 1.6349, D Loss: 0.7410, D Accuracy: 0.9224 Epoch 456/500 - G Loss: 1.6376, D Loss: 1.2520, D Accuracy: 0.6724 Epoch 457/500 - G Loss: 2.6050, D Loss: 0.7747, D Accuracy: 0.8276 Epoch 458/500 - G Loss: 1.3164, D Loss: 0.7914, D Accuracy: 0.9052 Epoch 459/500 - G Loss: 2.0489, D Loss: 0.8307, D Accuracy: 0.8190 Epoch 460/500 - G Loss: 1.1526, D Loss: 0.9765, D Accuracy: 0.8534 Epoch 461/500 - G Loss: 1.0292, D Loss: 0.9708, D Accuracy: 0.7759 Epoch 462/500 - G Loss: 2.1464, D Loss: 0.6904, D Accuracy: 0.9052 Epoch 463/500 - G Loss: 1.7275, D Loss: 0.8493, D Accuracy: 0.8621 Epoch 464/500 - G Loss: 1.3677, D Loss: 1.0181, D Accuracy: 0.7845 Epoch 465/500 - G Loss: 1.2288, D Loss: 1.0744, D Accuracy: 0.7845 Epoch 466/500 - G Loss: 0.9436, D Loss: 1.4630, D Accuracy: 0.5776 Epoch 467/500 - G Loss: 1.1249, D Loss: 0.9852, D Accuracy: 0.8276 Epoch 468/500 - G Loss: 2.4398, D Loss: 1.4273, D Accuracy: 0.5431 Epoch 469/500 - G Loss: 2.6008, D Loss: 0.6389, D Accuracy: 0.9224 Epoch 470/500 - G Loss: 1.9903, D Loss: 0.7189, D Accuracy: 0.9224 Epoch 471/500 - G Loss: 1.0887, D Loss: 1.3521, D Accuracy: 0.6121 Epoch 472/500 - G Loss: 1.6635, D Loss: 0.7770, D Accuracy: 0.8966 Epoch 473/500 - G Loss: 2.8015, D Loss: 0.4844, D Accuracy: 0.9828 Epoch 474/500 - G Loss: 2.0518, D Loss: 0.5622, D Accuracy: 0.9741 Epoch 475/500 - G Loss: 1.6571, D Loss: 0.7355, D Accuracy: 0.9138 Epoch 476/500 - G Loss: 2.0605, D Loss: 0.7418, D Accuracy: 0.8793 Epoch 477/500 - G Loss: 1.4310, D Loss: 0.7844, D Accuracy: 0.9052 Epoch 478/500 - G Loss: 0.4889, D Loss: 1.5429, D Accuracy: 0.6034 Epoch 479/500 - G Loss: 
0.6989, D Loss: 1.2117, D Accuracy: 0.7328 Epoch 480/500 - G Loss: 1.2749, D Loss: 0.7753, D Accuracy: 0.9569 Epoch 481/500 - G Loss: 2.1913, D Loss: 1.3790, D Accuracy: 0.5862 Epoch 482/500 - G Loss: 1.8508, D Loss: 0.7637, D Accuracy: 0.9052 Epoch 483/500 - G Loss: 3.3675, D Loss: 1.4667, D Accuracy: 0.5603 Epoch 484/500 - G Loss: 1.3755, D Loss: 0.7620, D Accuracy: 0.9138 Epoch 485/500 - G Loss: 1.1533, D Loss: 1.3402, D Accuracy: 0.6121 Epoch 486/500 - G Loss: 1.1391, D Loss: 1.3918, D Accuracy: 0.6207 Epoch 487/500 - G Loss: 1.1627, D Loss: 0.8539, D Accuracy: 0.9052 Epoch 488/500 - G Loss: 2.0213, D Loss: 1.0177, D Accuracy: 0.6552 Epoch 489/500 - G Loss: 0.8691, D Loss: 1.0481, D Accuracy: 0.7672 Epoch 490/500 - G Loss: 1.1182, D Loss: 1.0361, D Accuracy: 0.8017 Epoch 491/500 - G Loss: 2.8888, D Loss: 0.6680, D Accuracy: 0.8793 Epoch 492/500 - G Loss: 2.6421, D Loss: 0.8376, D Accuracy: 0.8017 Epoch 493/500 - G Loss: 1.6317, D Loss: 0.7941, D Accuracy: 0.9655 Epoch 494/500 - G Loss: 1.5677, D Loss: 1.1825, D Accuracy: 0.6810 Epoch 495/500 - G Loss: 2.1767, D Loss: 0.6818, D Accuracy: 0.9310 Epoch 496/500 - G Loss: 1.5596, D Loss: 1.0556, D Accuracy: 0.7586 Epoch 497/500 - G Loss: 2.1113, D Loss: 0.9123, D Accuracy: 0.7672 Epoch 498/500 - G Loss: 2.0538, D Loss: 0.8601, D Accuracy: 0.8017 Epoch 499/500 - G Loss: 0.5795, D Loss: 1.3396, D Accuracy: 0.6293 Epoch 500/500 - G Loss: 2.4462, D Loss: 0.6692, D Accuracy: 0.9224
# ========== Final 160-Image Grid ==========
generate_and_save_all(generator, latent_dim=latent_dim, num_classes=16, images_per_class=10,
                      save_dir="NEW_DCGAN_generated_images",
                      grid_path="DCGAN(1)generated_160_grid.png")
generator.save_weights("dcgan_generator_500_weights.h5")
# ========== Final 160-Image Grid (second sampling run) ==========
generate_and_save_all(generator, latent_dim=latent_dim, num_classes=16, images_per_class=10,
                      save_dir="NEW_DCGAN_generated_images",
                      grid_path="DCGAN(2)generated_160_grid.png")
generator.save_weights("dcgan_generator_500_weights.h5")
# Plot the per-epoch training curves side by side: generator loss,
# discriminator loss, and discriminator accuracy.
plt.figure(figsize=(14, 5))

plt.subplot(1, 3, 1)
plt.plot(g_losses)
plt.title("Generator Loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")

plt.subplot(1, 3, 2)
plt.plot(d_losses)
plt.title("Discriminator Loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")

plt.subplot(1, 3, 3)
plt.plot(d_accuracies)
plt.title("Discriminator Accuracy")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")

plt.tight_layout()
plt.show()
# Overlay the two losses on one axis to inspect the adversarial balance.
plt.figure(figsize=(8, 5))
plt.plot(g_losses, label="Generator Loss")
plt.plot(d_losses, label="Discriminator Loss")
plt.title("Generator vs Discriminator Loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.grid(True)
plt.show()
Final DCGAN Model Results¶
Loss and Accuracy Trends¶
Below are the training curves for the final DCGAN model after 500 epochs:
- Generator Loss – Shows how well the generator is learning to produce realistic images.
- Discriminator Loss – Measures the discriminator’s ability to distinguish between real and fake images.
- Discriminator Accuracy – Tracks the percentage of correct predictions made by the discriminator.
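The metric lists used in these plots (`g_losses`, `d_losses`, `d_accuracies`) are assumed to be appended to once per epoch inside the training loop. A minimal sketch of that bookkeeping, with a hypothetical `train_epoch` standing in for the real DCGAN update step, looks like:

```python
# Sketch of the per-epoch metric bookkeeping behind the plots.
# train_epoch is a placeholder for one epoch of real DCGAN updates;
# only the logging pattern is the point here.
def train_epoch(epoch):
    # A real implementation would return the epoch's averaged generator
    # loss, discriminator loss, and discriminator accuracy.
    return 1.5, 0.9, 0.8

epochs = 500
g_losses, d_losses, d_accuracies = [], [], []
for epoch in range(1, epochs + 1):
    g_loss, d_loss, d_acc = train_epoch(epoch)
    g_losses.append(g_loss)
    d_losses.append(d_loss)
    d_accuracies.append(d_acc)

# Matches the log format printed during training.
print(f"Epoch {epochs}/{epochs} - G Loss: {g_losses[-1]:.4f}, "
      f"D Loss: {d_losses[-1]:.4f}, D Accuracy: {d_accuracies[-1]:.4f}")
```

Because one value is stored per epoch, the x-axis in all three plots is the epoch index.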
Combined Loss Comparison¶
This plot directly compares Generator Loss and Discriminator Loss over time, allowing us to observe the adversarial training balance.
Observations¶
- Discriminator accuracy fluctuates but mostly sits in the 80–90% range, meaning the discriminator keeps a consistent edge without fully overpowering the generator.
- Generator loss fluctuates more widely than discriminator loss, which is expected under adversarial dynamics.
- The gap between generator and discriminator loss stays moderate, suggesting neither network collapses or dominates.
- Despite the noise in the curves, both networks continue to learn throughout the 500 epochs.
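Because the raw per-epoch curves are noisy, trends are easier to read after smoothing. A simple moving average (a sketch, not part of the training pipeline above) can be applied to any of the metric lists before plotting:

```python
import numpy as np

def moving_average(values, window=25):
    """Smooth a noisy per-epoch metric with a simple moving average."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(values, dtype=float), kernel, mode="valid")

# Demo on synthetic noisy losses; in the notebook, g_losses or d_losses
# from training would be passed instead.
rng = np.random.default_rng(0)
noisy = 1.5 + 0.5 * rng.standard_normal(500)
smooth = moving_average(noisy, window=25)
print(len(noisy), len(smooth))  # the smoothed series is window - 1 samples shorter
```

Plotting `smooth` alongside the raw series makes the long-run trend visible without hiding the epoch-to-epoch oscillation.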